

Search for: All records

Creators/Authors contains: "Walter, M."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Abstract

    High‐Resolution Multi‐scale Modeling Frameworks (HR)—global climate models that embed separate, convection‐resolving models with high enough resolution to resolve boundary layer eddies—have exciting potential for investigating low cloud feedback dynamics, owing to their reduced parameterization and their ability to achieve multidecadal throughput on modern computing hardware. However, low clouds in past HRs have suffered from a stubborn over‐entrainment problem caused by an uncontrolled source of mixing across the marine subtropical inversion, which manifests as stratocumulus dim biases in present‐day climate and limits their scientific utility. We report new results showing that this over‐entrainment can be partly offset by using hyperviscosity and cloud droplet sedimentation. Hyperviscosity damps small‐scale momentum fluctuations associated with the formulation of the momentum solver of the embedded large eddy simulation. Adding cloud droplet sedimentation alongside the default one‐moment microphysics in the HR removes condensed‐phase particles from the entrainment zone, which further reduces entrainment efficiency. The result is an HR that produces more low clouds with a higher liquid water path and a reduced stratocumulus dim bias. Associated improvements in the explicitly simulated sub‐cloud eddy spectrum are observed. We report these sensitivities in multi‐week tests and then explore their operational potential alongside microphysical retuning in decadal simulations at an operational 1.5° exterior resolution. The result is a new HR with the desired improvements in the baseline present‐day low cloud climatology, and a reduced global mean bias and root mean squared error of absorbed shortwave radiation. We suggest it is a promising tool for examining low cloud feedbacks with minimal approximation.

     
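    As an illustrative aside (a generic form, not a formulation taken from the paper itself), hyperviscosity in an embedded large eddy simulation is typically introduced as a high‐order damping term in the momentum equation; with an assumed damping coefficient \nu_4 it reads

      \frac{\partial u}{\partial t} = \cdots - \nu_4 \nabla^4 u .

    Because the damping strength grows with the fourth power of the wavenumber, it preferentially removes the smallest resolved momentum fluctuations, such as those tied to the solver formulation mentioned above, while leaving the energy‐containing boundary layer eddies comparatively untouched.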
  2. Abstract

    In this work, bottom‐up Al–Si–Al nanowire (NW) heterostructures are presented, which act as a prototype vehicle toward top‐down fabricated nanosheet (NS) and multi‐wire (MW) reconfigurable field‐effect transistors (RFETs). Evaluation of the key transistor parameters, namely the on‐ and off‐currents as well as the threshold voltages for n‐ and p‐type operation, reveals a high degree of symmetry. Most notably, a low device‐to‐device variability is also achieved. In this respect, the investigated Al–Si material system demonstrates its relevance for reconfigurable logic cells built from Si NSs. To show the versatility of the proposed devices, this work reports on a combinational wired‐AND gate obtained from a multi‐gate RFET. Additionally, up‐scaling of the current is achieved by realizing a MW RFET without compromising reconfigurability. The Al–Si–Al platform has substantial potential to enable complex adaptive and self‐learning combinational and sequential circuits for energy‐efficient, small‐footprint computing paradigms, as well as native components for hardware security circuits.

     
  3. Microplastic pollution is measured with a variety of sampling methods. Field experiments indicate that commonly used sampling methods, including net, pump, and grab samples, do not always yield equivalent measured concentrations. We investigate the comparability of these methods through a meta-analysis of 121 surface water microplastic studies. We find systematic relationships between measured concentration and sampled volume, method of collection, mesh size used for filtration, and waterbody sampled. Most significantly, a strong log-linear relationship exists between sample volume and measured concentration, with small-volume grab samples measuring up to 10^4 particles/L higher concentrations than larger-volume net samples, even when sampled concurrently. Potential biasing factors explored include filtration size (±10^2 particles/L), net volume overestimation (±10^1 particles/L), fiber loss through net mesh (unknown magnitude), intersample variability (±10^1 particles/L), and contamination, the only potential factor with an effect large enough (±10^3 particles/L) to explain the observed differences. On the basis of these results, we caution against comparing concentrations across multiple studies or combining multiple study results to identify regional patterns. Additionally, we emphasize the importance of contamination reduction and quantification strategies, namely that blank samples from all stages of field sampling be collected and reported as a matter of course for all studies.
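    As a hedged illustration (not code or data from the study), the reported log-linear relationship between sample volume and measured concentration can be checked on a compiled dataset with an ordinary least-squares fit in log-log space; the arrays below are hypothetical stand-ins for pooled study values.

      import numpy as np

      # Hypothetical pooled values: sampled volume (L) and measured
      # microplastic concentration (particles/L) from different methods.
      volume_l = np.array([0.5, 1.0, 10.0, 100.0, 1_000.0, 150_000.0])
      conc_per_l = np.array([500.0, 120.0, 8.0, 0.9, 0.05, 0.002])

      # Fit log10(concentration) = a + b * log10(volume); a log-linear
      # relationship appears as a straight line with slope b.
      b, a = np.polyfit(np.log10(volume_l), np.log10(conc_per_l), deg=1)
      print(f"slope = {b:.2f}, intercept = {a:.2f}")
      # A strongly negative slope reproduces the qualitative pattern above:
      # small-volume grab samples report far higher concentrations than
      # large-volume net samples.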
  4. Denitrification in woodchip bioreactors (WBRs) treating agricultural drainage and runoff is frequently carbon-limited due to the recalcitrance of carbon (C) in lignocellulosic woodchip biomass. Recent research has shown that redox fluctuations, achieved through periodic draining and re-flooding of WBRs, can increase nitrate removal rates by enhancing the release of labile C during oxic periods. While drying–rewetting (DRW) cycles appear to hold great promise for improving the performance of denitrifying WBRs, redox fluctuations in nitrogen-rich environments are commonly associated with enhanced emissions of the greenhouse gas nitrous oxide (N2O) due to inhibition of N2O reduction under microaerophilic conditions. Here, we evaluate the effects of the oxic–anoxic cycling associated with DRW on the quantity and quality of C mobilized from woodchips, nitrate removal rates, and N2O accumulation in a complementary set of flow-through and batch laboratory bioreactors at 20 °C. Redox fluctuations significantly increased nitrate removal rates, from 4.8–7.2 g N m^-3 d^-1 in a continuously saturated (CS) reactor to 9.8–11.2 g N m^-3 d^-1 measured 24 h after a reactor was drained and re-saturated. The results support the theory that DRW conditions lead to faster NO3- removal by increasing mobilization of labile organic C from woodchips, with lower aromaticity in the dissolved C pool of the oxic–anoxic reactors highlighting the importance of lignin breakdown to overall carbon release. There was no evidence for greater N2O accumulation, measured as N2O product yields, in the DRW reactors compared with the continuously saturated reactors. We propose that greater organic C availability for N2O reducers following oxic periods outweighs the effect of microaerophilic inhibition of N2O reduction in controlling N2O dynamics. Implications of these findings for optimizing DRW cycling to enhance nitrate removal rates in denitrifying WBRs are discussed.
  5. Abstract

    Denitrifying woodchip bioreactors (WBRs) are increasingly used to manage the release of non‐point source nitrogen (N) by stimulating microbial denitrification. Woodchips serve as a renewable organic carbon (C) source, yet the recalcitrance of organic C in lignocellulosic biomass causes many WBRs to be C‐limited. Prior studies have observed that oxic–anoxic cycling increased the mobilization of organic C, increased nitrate (NO3) removal rates, and attenuated production of nitrous oxide (N2O). Here, we use multi‐omics approaches and amplicon sequencing of fungal 5.8S‐ITS2 and prokaryotic 16S rRNA genes to elucidate the microbial drivers for enhanced NO3 removal and attenuated N2O production under redox‐dynamic conditions. Transient oxic periods stimulated the expression of fungal ligninolytic enzymes, increasing the bioavailability of woodchip‐derived C and stimulating the expression of denitrification genes. Nitrous oxide reductase (nosZ) genes were primarily clade II, and the ratio of clade II/clade I nosZ transcripts during the oxic–anoxic transition was strongly correlated with the N2O yield. Analysis of metagenome‐assembled genomes revealed that many of the denitrifying microorganisms also have a genotypic ability to degrade complex polysaccharides like cellulose and hemicellulose, highlighting the adaptation of the WBR microbiome to the ecophysiological niche of the woodchip matrix.

     
  6. Abstract

    High-resolution infrared spectra of comet C/2014 Q2 Lovejoy were acquired with NIRSPEC at the W. M. Keck Observatory on two post-perihelion dates (UT 2015 February 2 and 3). H2O was measured simultaneously with CO, CH3OH, H2CO, CH4, C2H6, C2H4, C2H2, HCN, and NH3 on both dates, and rotational temperatures, production rates, relative abundances, H2O ortho-to-para ratios, and spatial distributions in the coma were determined. The first detection of C2H4 in a comet from ground-based observations is reported. Abundances relative to H2O for all species were found to be in the typical range compared with values for other comets in the overall population to date. There is evidence of variability in rotational temperatures and production rates on timescales that are small compared with the rotational period of the comet. Spatial distributions of volatiles in the coma suggest complex outgassing behavior. CH3OH, HCN, C2H6, and CH4 spatial distributions in the coma are consistent with direct release from associated ices in the nucleus and are peaked in a more sunward direction compared with co-measured dust. H2O spatial profiles are clearly distinct from these other four species, likely due to a sizable coma contribution from icy grain sublimation. Spatial distributions for C2H2, H2CO, and NH3 suggest substantial contributions from extended coma sources, providing further evidence for distinct origins and associations for these species in comets. CO shows a different spatial distribution compared with other volatiles, consistent with jet activity from discrete nucleus ice sources.
  7. We consider the problem of computing succinct encodings of lists of generators for invariant rings of group actions. Mulmuley conjectured that there are always polynomial-sized such encodings for invariant rings of SL_n(C)-representations. We provide simple examples that disprove this conjecture (under standard complexity assumptions). We develop a general framework, which we call algebraic circuit search problems, that captures many important problems in algebraic complexity and computational invariant theory. This framework encompasses various proof systems in proof complexity and some of the central problems in invariant theory as exposed by the Geometric Complexity Theory (GCT) program, including the aforementioned problem of computing succinct encodings of generators for invariant rings.
  8. Abstract

    We design a new strategy to load‐balance high‐intensity sub‐grid atmospheric physics calculations restricted to a small fraction of a global climate simulation's domain. We show why the current parallel load‐balancing infrastructure of the Community Earth System Model (CESM) and the Energy Exascale Earth System Model (E3SM) cannot efficiently handle this scenario at large core counts. As an example, we study an unusual configuration of the E3SM Multiscale Modeling Framework (MMF) that embeds a binary mixture of two separate cloud‐resolving model grid structures, which is attractive for low cloud feedback studies. Less than a third of the planet uses high‐resolution cloud superparameterization (MMF‐HR; sub‐km horizontal grid spacing), relative to standard low‐resolution (MMF‐LR) cloud superparameterization elsewhere. To enable MMF runs with Multi‐Domain cloud‐resolving models (CRMs), our load‐balancing theory predicts the most efficient computational scale as a function of the high‐intensity work's relative overhead and its fractional coverage. The scheme successfully maximizes model throughput and minimizes model cost relative to the precursor infrastructure, effectively by devoting the vast majority of the processor pool to the few high‐intensity (and rate‐limiting) high‐resolution (HR) grid columns. Two examples prove the concept, showing that minor artifacts can be introduced near the HR/low‐resolution CRM grid transition boundary on idealized aquaplanets but are minimal in operationally relevant real‐geography settings. As intended, within the high‐ (low‐) resolution area, our Multi‐Domain CRM simulations exhibit cloud fraction and shortwave reflection convergent to standard baseline tests that use globally homogeneous MMF‐LR and MMF‐HR. We suggest this approach can open up a range of creative multi‐resolution climate experiments without requiring unduly large allocations of computational resources.

     
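    As an illustrative toy model only (a hypothetical sketch, not the CESM/E3SM infrastructure or the paper's actual theory), the core trade-off can be caricatured as follows: given the fraction f of columns that are high-intensity and the cost ratio r of one HR column to one LR column, split the processor pool so that neither column group becomes the bottleneck of the physics step.

      # Toy cost model (hypothetical names and values throughout): split P
      # processors between high-resolution (HR) and low-resolution (LR)
      # CRM columns so the slower of the two pools is as fast as possible.
      def step_time(p_hr, P, n_cols, f, r):
          """Wall-clock proxy for one physics step.

          f: fraction of columns that are HR; r: cost of one HR column
          relative to one LR column; p_hr: processors devoted to HR work.
          """
          p_lr = P - p_hr
          t_hr = f * n_cols * r / p_hr        # time for the HR column pool
          t_lr = (1.0 - f) * n_cols / p_lr    # time for the LR column pool
          return max(t_hr, t_lr)              # step waits on the slower pool

      def best_split(P, n_cols, f, r):
          return min(range(1, P), key=lambda p: step_time(p, P, n_cols, f, r))

      P, n_cols, f, r = 1024, 21600, 0.3, 50.0   # assumed illustrative values
      p_hr = best_split(P, n_cols, f, r)
      print(p_hr, P - p_hr)   # most processors serve the few expensive HR columns

    With most of the cost concentrated in the HR columns, the optimum assigns the vast majority of processors to them, which is the qualitative behavior described in the abstract above.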
  9. Population pharmacokinetic (PK) modeling has become a cornerstone of drug development and optimal patient dosing. This approach offers great benefits for datasets with sparse sampling, such as in pediatric patients, and can describe between-patient variability. While most current algorithms assume normal or log-normal distributions for PK parameters, we present a mathematically consistent nonparametric maximum likelihood (NPML) method for estimating multivariate mixing distributions without any assumption about the shape of the distribution. This approach can handle distributions of any shape for all PK parameters. Convexity theory shows that the NPML estimator is discrete, meaning that it has a finite number of points with nonzero probability; in fact, there are at most N points, where N is the number of observed subjects. The original infinite-dimensional NPML problem then becomes the finite-dimensional problem of finding the locations and probabilities of the support points. In the simplest case, each point essentially represents the set of PK parameters for one patient. The probabilities of the points are found by a primal-dual interior-point method; the locations of the support points are found by an adaptive grid method. Our method is able to handle high-dimensional and complex multivariate mixture models. An important application to population pharmacokinetics is discussed, and a nontrivial example is treated. Our algorithm has been successfully applied in hundreds of published pharmacometric studies. In addition to population pharmacokinetics, this research also applies to empirical Bayes estimation and many other areas of applied mathematics. This approach thereby presents an important addition to the pharmacometric toolbox for drug development and optimal patient dosing.
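    As a minimal, hedged sketch of the discreteness idea only (hypothetical code, not the published algorithm, which uses a primal-dual interior-point solver and adaptive grid refinement): if the candidate support points are fixed, say one candidate parameter vector per subject, the nonparametric likelihood reduces to choosing a probability weight for each candidate, which the classic EM-type multiplicative update for mixing weights can optimize.

      import numpy as np

      # lik[i, k] = likelihood of subject i's data under candidate support
      # point k. A random stand-in is used here; in practice it would come
      # from the PK model evaluated at each candidate parameter vector.
      rng = np.random.default_rng(0)
      N = 20                               # subjects = max number of support points
      lik = rng.uniform(0.1, 1.0, (N, N))

      w = np.full(N, 1.0 / N)              # start from uniform weights
      for _ in range(500):
          mix = lik @ w                    # marginal likelihood of each subject
          w *= (lik / mix[:, None]).mean(axis=0)   # EM update for mixing weights

      print("largest weights:", np.round(np.sort(w)[::-1][:5], 3))
      # In real applications many weights collapse toward zero, so the
      # estimated mixing distribution is supported on only a handful of the
      # at-most-N candidate points.

    Only the probabilities are optimized here; jointly refining the locations of the support points is what the adaptive grid step in the published method addresses.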
  10. Attribute-based encryption (ABE) is an advanced cryptographic tool that is useful for building various types of access control systems. Toward the goal of making ABE more practical, we propose key-policy (KP) and ciphertext-policy (CP) ABE schemes that are the first to simultaneously support unbounded sizes of attribute sets and policies with negation and multi-use of attributes, allow fast decryption, and achieve adaptive security under a standard assumption. Our schemes are more expressive than previous schemes while remaining sufficiently efficient. To achieve adaptive security along with the other properties, we refine the technique introduced by Kowalczyk and Wee (Eurocrypt '19) so that we can apply it to more expressive ABE schemes. Furthermore, we present a new proof technique that allows us to remove redundant elements used in their ABE schemes. We implement our schemes at the 128-bit security level and present benchmarks for an ordinary personal computer and smartphones. The benchmarks show that all algorithms run within one second on the personal computer when handling any policy or attribute set with one hundred attributes. [Note: this paper is not by the PI, but by Genise, who was supported by the grant; support was acknowledged in this publication.]